Curvelet-domain multiple elimination with sparseness constraints

Authors

  • Felix J. Herrmann
  • Eric Verschuur
Abstract

Predictive multiple suppression methods consist of two main steps: a prediction step, in which multiples are predicted from the seismic data, and a subtraction step, in which the predicted multiples are matched with the true multiples in the data. The last step appears crucial in practice: an incorrect adaptive subtraction will cause multiples to be sub-optimally subtracted, primaries to be distorted, or both. We therefore propose a new domain for the separation of primaries and multiples via the Curvelet transform. This transform maps the data into almost orthogonal, localized events with a directional and spatial-temporal component. The multiples are suppressed by thresholding the input data at those Curvelet components where the predicted multiples have large amplitudes. In this way, the more traditional filtering of predicted multiples to fit the input data is avoided. An initial field data example shows a considerable improvement in multiple suppression.

Introduction

In complex areas, move-out-based multiple suppression techniques may fail because their underlying assumptions are not met. Several attempts have been made to address this problem, either by extending move-out discrimination methods towards 3D complexities (e.g. by introducing apex-shifted hyperbolic transforms [9]) or by devising matching techniques for the wave-equation-based predictive methods [see e.g. 19, 1]. Least-squares matching of the predicted multiples in overlapping time and space windows [18] provides a straightforward subtraction method, in which the predicted multiples are matched to the true multiples for 2-D input data. Unfortunately, this matching procedure fails when the underlying 2D assumptions are severely violated. Several attempts have been made to address this issue, and the proposed solutions range from including surrounding shot positions [13] to methods based on model- [16] and data-driven [12] time delays, and to the separation of predicted multiples into (in)coherent parts [14]. Even though these recent advances in adaptive subtraction and other techniques have improved the attenuation of multiples, these methods continue to suffer from (i) a relatively strong sensitivity to the accuracy of the predicted multiples; (ii) the creation of spurious artifacts; or, worse, (iii) possible distortion of the primary energy. For these situations, subtraction techniques based on a different concept are needed to complement the processor's tool box. The method we propose here holds the middle ground between two complementary approaches common in multiple elimination: prediction combined with subtraction, and filtering [17, 9]. Whereas the first approach aims to predict the multiples and then subtract them, the second tries to find a domain in which the primaries and multiples separate, followed by a filtering operation and reconstruction. Our method is not distant from either, since it uses the predicted multiples to non-linearly filter the data in a domain spanned by almost orthogonal and local basis functions. We use the recently developed Curvelet transform [see e.g. 3], which decomposes data into basis functions that not only achieve nearly optimal sparseness of the coefficients, and hence reduce the dimensionality of the problem, but are also local in both location and angle/dip, facilitating the definition of non-linear estimators based on thresholding.
The main assumption of this proposal is that multiples and primaries locally have different temporal, spatial, and dip behavior, and therefore map into different areas of the Curvelet domain. Multiples give rise to large Curvelet coefficients in the input, and these coefficients can be muted by our estimation procedure when the threshold is set according to the Curvelet transform of the predicted multiples. As such, our suppression technique has, at each location in the transformed domain, one parameter, namely the threshold yielded by the predicted multiples, beyond which the input data are suppressed. In that sense, our procedure is similar to the ones proposed by [20] and [17], although these authors use the non-localized FK/Radon domains for the separation, while we use localized basis functions and non-linear estimation by thresholding. The non-locality and non-optimality of their approximation render those filtering techniques less effective, because primaries and multiples still overlap considerably. The Curvelet transform is able to make a local discrimination between interfering events with different temporal and spatial characteristics.

The inverse problem underlying adaptive subtraction

Virtually any linear problem in seismic processing and imaging can be seen as a special case of the following generic problem: how to obtain the model m from data d in the presence of (coherent) noise n,

d = Km + n,   (1)

in which, for the adaptive subtraction problem, K and K* represent time convolution and correlation, respectively, with an effective wavelet Φ that minimizes the following functional [see e.g. 8]

\min_{\Phi} \| d - \Phi(t) * m \|_p,   (2)

where d represents the data with multiples, m the predicted multiples, and \hat{n} = \min_{\Phi} \| d - \Phi * m \|_p the primaries-only data, obtained by minimizing the L_p-norm. For p = 2, Eq. 2 corresponds to a standard least-squares problem in which the energy of the primaries is minimized. Seismic data and images are notoriously non-stationary and non-Gaussian, which may give rise to a loss of coherent events in the denoised component when employing the L2-norm. The non-stationarity and non-Gaussianity have been addressed, with some success, by solving Eq. 2 within sliding windows and for p = 1 [8]. In this paper, we follow a different strategy and replace the above variational problem by

\hat{m}: \ \min_{m} \tfrac{1}{2} \| C_n^{-1/2} (d - m) \|_2^2 + \mu J(m).   (3)

In this formulation, the need to estimate a wavelet Φ has been removed. The data with multiples are again represented by d, but now m is the primaries-only model, on which an additional penalty function is imposed, such as an L1-norm, i.e. J(m) = ‖m‖₁. The noise term is now given by the predicted multiples, which explains the emergence of the covariance operator, whose kernel is given by

C_n = E\{ n n^T \}.   (4)

The question now is whether we can find a basis-function representation that is (i) sparse and local for the model (the primaries-only data) m, the data d, and the predicted multiples n, and (ii) almost diagonalizes the covariance of the model and of the predicted multiples. The answer is affirmative, even though the condition of locality is non-trivial. For instance, decompositions in terms of principal components (the Karhunen-Loève basis) or independent components may diagonalize the covariance operators, but these decompositions are generally not local and not necessarily the same for both multiples and primaries.
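For reference, the sketch below illustrates the conventional windowed least-squares matching of Eq. 2 with p = 2 for a single trace, i.e. the strategy this paper moves away from. It is a minimal illustration under stated assumptions only: the function names, the non-overlapping and untapered windows, and the filter length are choices made here for brevity, not the implementation of [18] or [8].

    import numpy as np

    def matching_filter(data, mult, nf=21):
        """Estimate a short convolution filter phi minimizing
        || data - phi * mult ||_2 (Eq. 2 with p = 2) in one window."""
        n = len(data)
        # Convolution matrix: column k holds the predicted multiples delayed
        # by k samples, so that M @ phi is the truncated convolution phi * mult.
        M = np.zeros((n, nf))
        for k in range(nf):
            M[k:, k] = mult[:n - k]
        phi, *_ = np.linalg.lstsq(M, data, rcond=None)
        return phi

    def adaptive_subtract(data, mult, nf=21, win=250):
        """Least-squares adaptive subtraction of predicted multiples from a
        single trace in (here non-overlapping) time windows; returns the
        estimated primaries d - phi * m per window."""
        data, mult = np.asarray(data, float), np.asarray(mult, float)
        prim = data.copy()
        for t0 in range(0, len(data), win):
            d, m = data[t0:t0 + win], mult[t0:t0 + win]
            if len(d) <= nf:          # window too short to estimate a filter
                continue
            phi = matching_filter(d, m, nf)
            prim[t0:t0 + win] = d - np.convolve(m, phi)[:len(d)]
        return prim

In practice the windows overlap and are tapered, and [8] obtains better results for non-stationary, non-Gaussian data by minimizing the L1-norm instead of the L2-norm.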
By selecting a basis-function decomposition that is local, sparse, and almost diagonalizing (i.e. C_ñ ≈ diag{C_ñ} = diag{Γ²_ñ}), we find a solution to the above denoising problem that minimizes (mini) the maximal (max) mean-square error given the worst possible Bayesian prior for the model [15, 7]. This minimax estimator minimizes

\hat{m}: \ \min_{m} \tfrac{1}{2} \| \Gamma_{\tilde{n}}^{-1} ( \tilde{d} - \tilde{m} ) \|_2^2   (5)

by a simple diagonal non-linear thresholding operator

\hat{m} = B^{\dagger} \Gamma \, \Theta_{\lambda} ( \Gamma^{-1} B d ) = B^{\dagger} \Theta_{\lambda \Gamma} ( \tilde{d} ).   (6)

The threshold is set to λ diag{Γ} (we dropped the subscript ñ), with λ an additional control parameter that sets the confidence interval (e.g. the confidence interval is 95 % for λ = 3), (de-)emphasizing the thresholding. This thresholding operation removes the bulk of the multiples by shrinking the coefficients that are smaller than the noise to zero. The symbol † indicates the (pseudo-)inverse, allowing us to use frames rather than orthonormal bases. Before applying this thresholding to actual data, let us first focus on the selection of the appropriate basis-function decomposition that accomplishes the above task of replacing the adaptive subtraction problem given in Eq. 2 by its diagonalized counterpart (cf. Eq. 6).

The basis functions

Curvelets, as proposed by [3], constitute a relatively new family of non-separable wavelet bases that are designed to effectively represent seismic data whose reflectors generally tend to lie on piece-wise smooth curves. This property makes Curvelets suitable for representing events in seismic data, whether these are located in shot records or time slices. For these types of signals, Curvelets obtain nearly optimal sparseness because of (i) the rapid decay of the reconstruction error as a function of the number of largest coefficients retained; (ii) the ability to concentrate the signal's energy in a limited number of coefficients; and (iii) the ability to map noise and signal to different areas of the Curvelet domain. So how do Curvelets obtain such a high non-linear approximation rate? Without being all-inclusive [see for details 4, 2, 3, 6], the answer lies in the fact that Curvelets are

  • multi-scale, i.e. they live in different dyadic coronae in the FK-domain (see [3] or the other contributions of the first author to the proceedings of this conference for more detail);
  • multi-directional, i.e. they live on wedges within these coronae (see Fig. 1);
  • anisotropic, i.e. they obey the scaling law width ∝ length²;
  • directionally selective, with the number of orientations ∝ 1/√scale;
  • local in both (x, t) and FK;
  • almost orthogonal: they are tight frames with a moderate redundancy.

Contourlets implement the pseudo-inverse in closed form, while Curvelets provide the transform and its adjoint, yielding a pseudo-inverse computed by iterative Conjugate Gradients.

[Fig. 1: Curvelet basis functions.]
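To make Eq. 6 concrete, here is a minimal sketch of the diagonal thresholding estimator under stated assumptions: a generic forward/inverse transform pair stands in for the curvelet analysis B and its pseudo-inverse B†, hard thresholding is used for Θ_λ, and the per-coefficient threshold is λ times the amplitude of the transformed predicted multiples, as described above. The 2-D FFT in the usage lines is only a convenient, non-localized stand-in, not the curvelet transform of the paper; all names are illustrative.

    import numpy as np

    def threshold_separation(data, pred_mult, forward, inverse, lam=3.0):
        """Diagonal thresholding estimator (cf. Eq. 6): coefficients of the
        input are muted wherever the predicted multiples have large
        transform-domain amplitudes; the surviving coefficients are mapped
        back to give an estimate of the primaries."""
        d_coef = forward(data)              # B d: transform of the input data
        gamma = np.abs(forward(pred_mult))  # diag{Gamma}: amplitude of B n
        keep = np.abs(d_coef) >= lam * gamma        # hard-thresholding mask
        return inverse(np.where(keep, d_coef, 0))   # pseudo-inverse of kept coefficients

    # Usage with a 2-D FFT pair as a crude stand-in for the curvelet
    # forward transform and its pseudo-inverse:
    rng = np.random.default_rng(0)
    data = rng.standard_normal((128, 128))        # input: primaries + multiples
    pred_mult = rng.standard_normal((128, 128))   # wave-equation predicted multiples
    primaries = threshold_separation(data, pred_mult,
                                     forward=np.fft.fft2,
                                     inverse=lambda c: np.fft.ifft2(c).real)

Because the estimator acts coefficient by coefficient, its only tuning parameter is λ, which scales the per-coefficient threshold λ diag{Γ} discussed above; how well primaries and multiples actually separate is determined by the locality and sparseness of the transform.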


Related articles

Curvelet-domain preconditioned 'wave-equation' depth-migration with sparseness & illumination constraints

A non-linear edge-preserving solution to the least-squares migration problem with sparseness & illumination constraints is proposed. The applied formalism explores Curvelets as basis functions. By virtue of their sparseness and locality, Curvelets not only reduce the dimensionality of the imaging problem but they also naturally lead to a dense preconditioning that almost diagonalizes the normal...


Curvelet-based non-linear adaptive subtraction with sparseness constraints

In this paper an overview is given on the application of directional basis functions, known under the name Curvelets/Contourlets, to various aspects of seismic processing and imaging, which involve adaptive subtraction. Key concepts in the approach are the use of (i) directional basis functions that localize in both domains (e.g. space and angle); (ii) non-linear estimation, which corresponds t...


Sparsity- and continuity-promoting seismic image recovery with curvelet frames

A nonlinear singularity-preserving solution to seismic image recovery with sparseness and continuity constraints is proposed. We observe that curvelets, as a directional frame expansion, lead to sparsity of seismic images and exhibit invariance under the normal operator of the linearized imaging problem. Based on this observation we derive a method for stable recovery of the migration amplitude...


3D gravity data-space inversion with sparseness and bound constraints

One of the key foundations of gravity data inversion is the recognition of sharp boundaries between an ore body and its host rocks during the interpretation step. Therefore, in this work, we attempt to develop an inversion approach to determine a 3D density distribution that produces a given gravity anomaly. The subsurface model consists of 3D rectangular prisms of known sizes ...


Sparseness-constrained seismic deconvolution with Curvelets

Continuity along reflectors in seismic images is used via Curvelet representation to stabilize the convolution operator inversion. The Curvelet transform is a new multiscale transform that provides sparse representations for images that comprise smooth objects separated by piece-wise smooth discontinuities (e.g. seismic images). Our iterative Curvelet-regularized deconvolution algorithm combine...



Publication year: 2004